65 research outputs found

    A Hybrid ICA-Wavelet Transform for Automated Artefact Removal in EEG-based Emotion Recognition

    Emotion Imagery BCI

    Applications of gravitational search algorithm in engineering

    Gravitational search algorithm (GSA) is a nature-inspired conceptual framework with roots in gravitational kinematics, the branch of physics that models the motion of masses moving under the influence of gravity. In a recent article, the authors reviewed the principles of GSA. This article presents a review of applications of GSA in engineering, including combinatorial optimization problems, the economic load dispatch problem, the economic and emission dispatch problem, the optimal power flow problem, the optimal reactive power dispatch problem, the energy management system problem, clustering and classification problems, the feature subset selection problem, parameter identification, training neural networks, the traveling salesman problem, filter design and communication systems, the unit commitment problem, and multi-objective optimization problems.
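
    The abstract surveys applications rather than the mechanics of the method, so a brief sketch may help readers unfamiliar with GSA. The code below assumes the standard GSA formulation (fitness-derived masses, a decaying gravitational constant, and a shrinking Kbest elite set); all function names, parameter values, and the sphere test objective are illustrative rather than taken from the reviewed article.

```python
# Minimal sketch of the gravitational search algorithm (GSA) for unconstrained
# minimisation, assuming the standard formulation; names and constants are
# illustrative only.
import numpy as np

def gsa(objective, dim=2, n_agents=30, iters=200, lb=-5.0, ub=5.0,
        g0=100.0, alpha=20.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, size=(n_agents, dim))   # agent positions (masses)
    v = np.zeros_like(x)                            # agent velocities
    best_x, best_f = None, np.inf

    for t in range(iters):
        fit = np.apply_along_axis(objective, 1, x)
        if fit.min() < best_f:
            best_f, best_x = fit.min(), x[fit.argmin()].copy()

        # Map fitness to normalised gravitational masses (smaller is better).
        worst, best = fit.max(), fit.min()
        m = (worst - fit) / (worst - best + 1e-12)
        M = m / (m.sum() + 1e-12)

        # Gravitational constant decays over time; Kbest shrinks towards 1.
        G = g0 * np.exp(-alpha * t / iters)
        kbest = int(round(n_agents - (n_agents - 1) * t / iters))
        elite = np.argsort(fit)[:kbest]

        # Acceleration of each agent due to the Kbest heaviest agents.
        acc = np.zeros_like(x)
        for i in range(n_agents):
            for j in elite:
                if j == i:
                    continue
                diff = x[j] - x[i]
                dist = np.linalg.norm(diff)
                acc[i] += rng.random() * G * M[j] * diff / (dist + 1e-12)

        # Stochastic velocity update and position move, clipped to the bounds.
        v = rng.random(size=v.shape) * v + acc
        x = np.clip(x + v, lb, ub)

    return best_x, best_f

# Example: minimise the sphere function.
if __name__ == "__main__":
    xb, fb = gsa(lambda z: float(np.sum(z ** 2)))
    print(xb, fb)
```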

    Brief history of natural sciences for nature-inspired computing in engineering

    The goal of the authors is the adroit integration of the three mainstream disciplines of the natural sciences, namely physics, chemistry, and biology, to create novel problem-solving paradigms. This paper presents a brief history of the development of the natural sciences and highlights some milestones that subsequently influenced many branches of science, engineering, and computing, as a prelude to nature-inspired computing, which has captured the imagination of computing researchers over the past three decades. The idea is to summarize this massive body of knowledge succinctly in a single paper. The paper is organised into three main sections: developments in physics, developments in chemistry, and developments in biology. Examples of recently proposed computing approaches inspired by the three branches of the natural sciences are provided.

    Facial expression recognition on partial facial sections

    Using Reinforcement Learning to Attenuate for Stochasticity in Robot Navigation Controllers

    Braitenberg vehicles are bio-inspired controllers for sensor-based local navigation of wheeled robots that have been used in multiple real-world robotic implementations. The common approach to implementing such non-linear control mechanisms is through neural networks connecting sensing to motor action, yet tuning the weights to obtain appropriate closed-loop navigation behaviours can be very challenging. Standard approaches used hand-tuned spiking or recurrent neural networks, or learnt the weights of feedforward networks using evolutionary approaches. Recently, reinforcement learning has been used to learn neural controllers for the simulated Braitenberg vehicle 3a, a bio-inspired model of target seeking for wheeled robots, under the assumption of noiseless sensors. Real sensors, however, are subject to different levels of noise, and multiple works have shown that Braitenberg vehicles operate even on outdoor robots, demonstrating that these control mechanisms cope with harsh and dynamic environments. This paper shows that a robust neural controller for Braitenberg vehicle 3a can be learnt using policy-gradient reinforcement learning in scenarios where sensor noise plays a non-negligible role. The learnt controller is robust and tries to attenuate the effects of noise in the closed-loop navigation behaviour of the simulated stochastic vehicle. We compare the neural controller learnt using reinforcement learning with a simple hand-tuned controller and show how the neural control mechanism outperforms the naïve controller. Results are illustrated through computer simulations of the closed-loop stochastic system.
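
    To make the setting concrete, the sketch below simulates a Braitenberg vehicle 3a with noisy sensors, driven by the kind of hand-tuned, uncrossed inhibitory sensor-to-motor map the paper uses as a baseline. The differential-drive kinematics, noise model, and all constants are assumptions for illustration, not the paper's actual setup; the paper itself replaces the hand-tuned map with a feedforward network whose weights are learnt by policy-gradient reinforcement learning.

```python
# Illustrative simulation of a stochastic Braitenberg vehicle 3a with a
# hand-tuned baseline controller. Kinematics, sensor model, and constants
# are assumptions, not taken from the paper.
import numpy as np

DT = 0.05          # integration step [s]
WHEEL_BASE = 0.2   # distance between wheels [m]
V_MAX = 0.5        # maximum wheel speed [m/s]
NOISE_STD = 0.1    # additive Gaussian sensor noise (assumed)

def sensor_readings(pose, target, rng):
    """Two forward-facing intensity sensors; reading decays with distance."""
    x, y, th = pose
    readings = []
    for offset in (+0.3, -0.3):                 # left / right sensor offsets
        sx = x + 0.1 * np.cos(th + offset)
        sy = y + 0.1 * np.sin(th + offset)
        d = np.hypot(target[0] - sx, target[1] - sy)
        readings.append(1.0 / (1.0 + d ** 2) + rng.normal(0.0, NOISE_STD))
    return np.clip(readings, 0.0, 1.0)

def braitenberg_3a(left_s, right_s):
    """Uncrossed inhibitory connections: a stronger stimulus slows the
    same-side wheel, turning the vehicle towards the target and braking
    as it gets close."""
    return V_MAX * (1.0 - left_s), V_MAX * (1.0 - right_s)

def step(pose, v_left, v_right):
    """Differential-drive kinematics update."""
    x, y, th = pose
    v = 0.5 * (v_left + v_right)
    w = (v_right - v_left) / WHEEL_BASE
    return np.array([x + v * np.cos(th) * DT,
                     y + v * np.sin(th) * DT,
                     th + w * DT])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pose, target = np.array([0.0, 0.0, 0.0]), np.array([2.0, 1.5])
    for _ in range(2000):
        left_s, right_s = sensor_readings(pose, target, rng)
        pose = step(pose, *braitenberg_3a(left_s, right_s))
    print("final distance to target:", np.hypot(*(target - pose[:2])))
```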

    Concurrent Skill Composition using Ensemble of Primitive Skills

    One of the key characteristics of an open-ended, cumulative learning agent is that it should use the knowledge gained from prior learning to solve future tasks. That characteristic is especially important in robotics, as learning every perception-action skill from scratch is not only time consuming but may not always be feasible. In the case of reinforcement learning, this learned knowledge is called a policy. A lifelong learning agent should treat the policies of learned tasks as building blocks for solving future tasks. One categorization of tasks is based on their composition, ranging from primitive tasks to compound tasks that are either sequential or concurrent combinations of primitive tasks. Thus, the agent needs to be able to combine the policies of primitive tasks to solve compound tasks, which are then added to its knowledge base. Inspired by modular neural networks, we propose an approach to compose policies for compound tasks that are concurrent combinations of disjoint tasks. Furthermore, we hypothesize that learning in a specialized environment leads to more efficient learning; hence, we create scaffolded environments in which the robot learns primitive skills for our mobile-robot experiments. We then show how the agent can combine those primitive skills to learn solutions for compound tasks. This reduces the overall training time for multiple skills and creates a versatile agent that can mix and match skills.
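
    The abstract does not spell out the composition rule, so the sketch below illustrates just one plausible scheme: each primitive policy reads its own disjoint slice of the observation, and the ensemble executes their action components concurrently by concatenation. The class, the function names, and the toy primitives are hypothetical and are not taken from the paper.

```python
# Hedged, illustrative sketch of composing primitive policies into a
# concurrent compound policy. The composition rule (disjoint observation
# slices, concatenated actions) is an assumption, not the paper's method.
from typing import Callable, Sequence
import numpy as np

Policy = Callable[[np.ndarray], np.ndarray]   # observation slice -> action slice

class ConcurrentEnsemble:
    """Compose disjoint primitive skills into one compound policy."""

    def __init__(self, primitives: Sequence[Policy],
                 obs_slices: Sequence[slice]):
        assert len(primitives) == len(obs_slices)
        self.primitives = list(primitives)
        self.obs_slices = list(obs_slices)

    def act(self, observation: np.ndarray) -> np.ndarray:
        # Each primitive sees only the part of the state it was trained on;
        # their action components are executed concurrently.
        parts = [policy(observation[s])
                 for policy, s in zip(self.primitives, self.obs_slices)]
        return np.concatenate(parts)

if __name__ == "__main__":
    # Two toy primitives: one drives towards a goal (2-D action), one points
    # a sensor head at a bearing (1-D action). Their action spaces are disjoint.
    drive = lambda goal_vec: 0.5 * goal_vec / (np.linalg.norm(goal_vec) + 1e-8)
    point = lambda bearing: np.array([np.clip(bearing[0], -1.0, 1.0)])

    compound = ConcurrentEnsemble([drive, point],
                                  [slice(0, 2), slice(2, 3)])
    obs = np.array([1.0, -2.0, 0.4])   # [goal_dx, goal_dy, head_bearing_error]
    print(compound.act(obs))           # -> concatenated [vx, vy, head_rate]
```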